
    Prediction of Memory Retrieval Performance Using Ear-EEG Signals

    Many studies have explored brain signals during the performance of a memory task in order to predict which items will later be remembered. However, such prediction methods remain impractical in real life because they rely on electroencephalography (EEG) recorded from the scalp. Ear-EEG has recently been used to measure brain signals owing to its flexibility in real-world environments. In this study, we attempted to predict whether a shown stimulus would be remembered or forgotten using ear-EEG and compared its performance with that of scalp-EEG. Our results showed no significant difference between ear-EEG and scalp-EEG. In addition, the highest prediction accuracy was obtained using a convolutional neural network (pre-stimulus: 74.06%, on-going stimulus: 69.53%), which outperformed the other baseline methods. These results show that it is possible to predict memory task performance using ear-EEG signals, which could be used to predict memory retrieval in a practical brain-computer interface.
    Comment: Accepted for publication at EMBC 202
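
    A minimal sketch of the kind of binary remembered/forgotten classifier the abstract describes, assuming PyTorch, two ear-EEG channels, and a 1 s pre-stimulus epoch at 250 Hz; all layer sizes are illustrative, not the authors' architecture.

```python
# Hypothetical CNN for remembered-vs-forgotten prediction from ear-EEG.
# Assumptions (not from the paper): 2 ear-EEG channels, 250 samples/epoch.
import torch
import torch.nn as nn

class EarEEGNet(nn.Module):
    def __init__(self, n_channels=2, n_samples=250):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12),  # temporal filter
            nn.BatchNorm1d(16),
            nn.ELU(),
            nn.AvgPool1d(4),
            nn.Conv1d(16, 32, kernel_size=11, padding=5),
            nn.BatchNorm1d(32),
            nn.ELU(),
            nn.AvgPool1d(4),
        )
        self.classifier = nn.Linear(32 * (n_samples // 16), 2)  # remembered / forgotten

    def forward(self, x):            # x: (batch, channels, samples)
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = EarEEGNet()
logits = model(torch.randn(8, 2, 250))  # 8 dummy pre-stimulus epochs
```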

    End-to-End Automatic Sleep Stage Classification Using Spectral-Temporal Sleep Features

    Sleep disorders are neurological conditions that can greatly affect the quality of daily life. Manually classifying sleep stages to detect sleep disorders is burdensome, so automatic sleep stage classification techniques are needed. However, previous automatic sleep scoring methods using raw signals still show low classification performance. In this study, we propose an end-to-end automatic sleep staging framework based on optimal spectral-temporal sleep features, evaluated on the Sleep-EDF dataset. The input data were filtered with a bandpass filter and then applied to a convolutional neural network model. For five-stage sleep classification, the classification performance was 85.6% and 91.1% using the raw input data and the proposed input, respectively. This is also the highest performance compared to previous studies using the same dataset. The proposed framework achieves high performance by using optimal features associated with each sleep stage, which may help to find new features for automatic sleep staging methods.
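
    The preprocessing step described above (bandpass filtering before the CNN) might look like the following sketch; the 0.5-40 Hz passband and 100 Hz sampling rate are assumptions, not values from the paper.

```python
# Hypothetical bandpass preprocessing for sleep-stage epochs (Sleep-EDF style).
# The 0.5-40 Hz band and 100 Hz sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(epochs, fs=100.0, low=0.5, high=40.0, order=4):
    """epochs: (n_epochs, n_samples) raw EEG; returns a filtered copy."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, epochs, axis=-1)

# 30 s epochs at 100 Hz -> 3000 samples each
raw = np.random.randn(16, 3000)
filtered = bandpass(raw)             # fed to the CNN instead of the raw input
```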

    Assessment of Unconsciousness for Memory Consolidation Using EEG Signals

    The assessment of consciousness and unconsciousness is a challenging issue in modern neuroscience. Consciousness is closely related to memory consolidation in that memory is a critical component of conscious experience. So far, many studies have examined memory consolidation during consciousness, but there is little research on memory consolidation during unconsciousness. Therefore, we aim to assess unconsciousness in terms of memory consolidation using electroencephalogram signals. In particular, we used the unconscious state during a nap, because sleep is the only state in which consciousness disappears under normal physiological conditions. Seven participants performed two memory tasks (word-pair and visuo-spatial) before and after the nap to assess memory consolidation during unconsciousness. As a result, spindle power in the central, parietal, and occipital regions during unconsciousness was positively correlated with location memory performance. There were also negative correlations between delta connectivity and word-pair memory, between alpha connectivity and location memory, and between spindle connectivity and word-pair memory. We additionally observed a significant relationship between unconsciousness and brain changes during memory recall before and after the nap. These findings could offer new insights into the assessment of unconsciousness by exploring its relationship with memory consolidation.
    Comment: Submitted to the IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC 2020)
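
    A sketch of one analysis step described above: spindle-band power (taken here as 12-15 Hz, an assumption) from Welch power spectra, correlated with memory scores. The sampling rate and the scores are placeholders.

```python
# Illustrative spindle-power / memory-performance correlation.
# The 12-15 Hz spindle band and 100 Hz rate are assumptions for the sketch.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

def band_power(sig, fs=100.0, band=(12.0, 15.0)):
    f, psd = welch(sig, fs=fs, nperseg=int(4 * fs))
    mask = (f >= band[0]) & (f <= band[1])
    return np.trapz(psd[mask], f[mask])

# One nap recording per participant (7 here) and their location-memory scores.
naps = [np.random.randn(100 * 600) for _ in range(7)]     # 10 min at 100 Hz
scores = np.array([0.55, 0.60, 0.72, 0.48, 0.66, 0.70, 0.58])
spindle = np.array([band_power(s) for s in naps])
r, p = pearsonr(spindle, scores)
print(f"r={r:.2f}, p={p:.3f}")
```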

    Decoding Event-related Potential from Ear-EEG Signals based on Ensemble Convolutional Neural Networks in Ambulatory Environment

    Recently, research on practical brain-computer interfaces has been actively carried out, especially in ambulatory environments. However, electroencephalography (EEG) signals are distorted by movement artifacts and electromyography signals when users are moving, which makes it hard to recognize human intention. In addition, as hardware issues are also challenging, ear-EEG has been developed for practical brain-computer interfaces and has come into wide use. In this paper, we propose ensemble-based convolutional neural networks for the ambulatory environment and analyze the visual event-related potential responses in scalp- and ear-EEG in terms of statistical analysis and brain-computer interface performance. The brain-computer interface performance deteriorated by 3-14% when walking fast at 1.6 m/s. The proposed method achieved an average area under the curve of 0.728 and proved robust to the ambulatory environment and to imbalanced data.
    Comment: Submitted to the 9th IEEE International Winter Conference on Brain-Computer Interface. arXiv admin note: text overlap with arXiv:2002.0108
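
    One plausible reading of the ensemble step, as a sketch: average the softmax outputs of several independently trained CNNs and score with the area under the ROC curve. The base model and the use of simple probability averaging are assumptions, not the paper's stated design.

```python
# Hypothetical soft-voting ensemble over K trained CNNs, scored by AUC.
import numpy as np
import torch
from sklearn.metrics import roc_auc_score

def ensemble_predict(models, x):
    """Average class-1 probabilities over an ensemble of binary classifiers."""
    with torch.no_grad():
        probs = [torch.softmax(m(x), dim=1)[:, 1] for m in models]
    return torch.stack(probs).mean(dim=0).numpy()

# models: list of trained torch.nn.Module instances (e.g., a small EEG CNN
# like the earlier sketch); x: (n_trials, channels, samples) ERP epochs;
# y: 0/1 labels for target vs. non-target stimuli.
# auc = roc_auc_score(y, ensemble_predict(models, x))
```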

    Network of Evolvable Neural Units: Evolving to Learn at a Synaptic Level

    Although Deep Neural Networks have seen great success in recent years through various changes in overall architectures and optimization strategies, their fundamental underlying design remains largely unchanged. Computational neuroscience, on the other hand, provides more biologically realistic models of neural processing mechanisms, but these are still high-level abstractions of the actual experimentally observed behaviour. Here a model is proposed that bridges Neuroscience, Machine Learning and Evolutionary Algorithms to evolve individual soma and synaptic compartment models of neurons in a scalable manner. Instead of attempting to manually derive models for all the observed complexity and diversity in neural processing, we propose an Evolvable Neural Unit (ENU) that can approximate the function of each individual neuron and synapse. We demonstrate that this type of unit can be evolved to mimic Integrate-And-Fire neurons and synaptic Spike-Timing-Dependent Plasticity. Additionally, by constructing a new type of neural network in which each synapse and neuron is such an evolvable neural unit, we show it is possible to evolve an agent capable of learning to solve a T-maze environment task. This network independently discovers spiking dynamics and reinforcement-type learning rules, opening up a new path towards biologically inspired artificial intelligence.
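
    A toy sketch of the evolutionary idea only: treat a small recurrent unit (here a GRU cell standing in for an ENU) as a parameter vector and improve it with a mutation-and-selection loop. The fitness function, population sizes, and the unit itself are placeholders, not the paper's setup.

```python
# Toy (mu + lambda)-style evolution of a tiny recurrent unit's parameters.
# The GRU cell stands in for an Evolvable Neural Unit; fitness is a placeholder.
import copy
import torch
import torch.nn as nn

def fitness(unit):
    """Placeholder objective: reward units whose output tracks a target signal."""
    h = torch.zeros(1, 8)
    err = torch.tensor(0.0)
    for t in range(20):
        x = torch.sin(torch.tensor([[t / 3.0]])).repeat(1, 4)
        h = unit(x, h)
        err = err + (h.mean() - torch.cos(torch.tensor(t / 3.0))) ** 2
    return -err.item()

def mutate(unit, sigma=0.05):
    child = copy.deepcopy(unit)
    with torch.no_grad():
        for p in child.parameters():
            p.add_(sigma * torch.randn_like(p))
    return child

pop = [nn.GRUCell(4, 8) for _ in range(16)]   # population of candidate units
for gen in range(50):
    pop.sort(key=fitness, reverse=True)       # select the fittest units
    pop = pop[:4] + [mutate(p) for p in pop[:4] for _ in range(3)]
print("best fitness:", fitness(pop[0]))
```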

    Prediction of Event Related Potential Speller Performance Using Resting-State EEG

    The event-related potential (ERP) speller can be utilized for device control and communication by locked-in or severely injured patients. However, problems such as inter-subject performance instability and ERP-illiteracy are still unresolved. Therefore, it is necessary to predict classification performance before running an ERP speller in order to use it efficiently. In this study, we investigated correlations with ERP speller performance using resting-state EEG recorded before the ERP speller session. Specifically, we used spectral power and functional connectivity across four brain regions and five frequency bands. As a result, delta power in the frontal region and functional connectivity in the delta, alpha, and gamma bands were significantly correlated with ERP speller performance. We also predicted ERP speller performance using EEG features from the resting state. These findings may contribute to investigating ERP-illiteracy and to considering appropriate alternatives for each user.
    Comment: Accepted to IEEE EMBC 202
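
    A sketch of the kind of resting-state feature extraction implied above: per-band spectral power feeding a simple linear predictor of speller accuracy. The band edges, sampling rate, and choice of linear regression are assumptions for illustration.

```python
# Illustrative resting-state features -> ERP speller performance prediction.
# Frequency bands and the linear model are assumptions for the sketch.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LinearRegression

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_powers(rest_eeg, fs=250.0):
    """rest_eeg: (n_channels, n_samples) -> one mean power per band."""
    f, psd = welch(rest_eeg, fs=fs, nperseg=int(2 * fs))
    return [psd[:, (f >= lo) & (f < hi)].mean() for lo, hi in BANDS.values()]

# X: one feature row per subject; y: measured speller accuracy per subject.
rest = [np.random.randn(32, 250 * 60) for _ in range(10)]  # 1 min resting EEG
y = np.random.uniform(0.6, 1.0, size=10)
X = np.array([band_powers(r) for r in rest])
model = LinearRegression().fit(X, y)
print("predicted accuracies:", model.predict(X[:3]))
```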

    Decoding Visual Recognition of Objects from EEG Signals based on Attention-Driven Convolutional Neural Network

    The ability to perceive and recognize objects is fundamental for interaction with the external environment. Studies investigating this ability and its relationship with changes in brain activity have been increasing due to possible applications in intuitive brain-machine interfaces (BMIs). The distinctive patterns elicited by different visual stimuli, which make the data differentiable enough to be classified, have also been studied. However, reported classification accuracies are still low, or the techniques employed to obtain brain signals are impractical for real environments. In this study, we aim to decode electroencephalography (EEG) signals according to the visual stimulus provided. Subjects were presented with 72 photographs belonging to 6 different semantic categories. We classified the 6 categories and 72 exemplars from the EEG signals. To achieve high classification accuracy, we propose an attention-driven convolutional neural network and compare our results with conventional methods for classifying EEG signals. We report accuracies of 50.37% and 26.75% for the 6-class and 72-class problems, respectively, statistically outperforming the other conventional methods. This was possible because of the attention network inspired by human visual pathways. Our findings show that EEG signals can be differentiated when subjects are presented with visual stimuli of different semantic categories, even at the exemplar level, with high classification accuracy; this demonstrates the viability of applying the approach in a real-world BMI.
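
    The abstract does not specify the attention mechanism; as an illustration, the sketch below adds a squeeze-and-excitation-style channel attention block to a small EEG CNN. The channel count, epoch length, and layer sizes are assumptions.

```python
# Hypothetical attention block for an EEG CNN (squeeze-and-excitation style).
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, n_filters, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(n_filters, n_filters // reduction), nn.ReLU(),
            nn.Linear(n_filters // reduction, n_filters), nn.Sigmoid(),
        )

    def forward(self, x):                  # x: (batch, filters, time)
        w = self.gate(x.mean(dim=-1))      # squeeze over time, excite per filter
        return x * w.unsqueeze(-1)

net = nn.Sequential(
    nn.Conv1d(64, 32, kernel_size=15, padding=7),  # 64 EEG channels assumed
    nn.ELU(),
    ChannelAttention(32),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(32, 6),                      # 6 semantic categories
)
scores = net(torch.randn(4, 64, 500))      # 4 trials, 2 s at 250 Hz assumed
```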

    Spatio-Temporal Dynamics of Visual Imagery for Intuitive Brain-Computer Interface

    Visual imagery is an intuitive brain-computer interface paradigm referring to the mental emergence of a visual scene. Despite its convenience, analysis of its intrinsic characteristics has been limited. In this study, we demonstrate how the time interval and channel selection affect the decoding performance of multi-class visual imagery. We divided each epoch into time intervals of 0-1 s and 1-2 s and performed six-class classification over three different brain regions: the whole brain, the visual cortex, and the prefrontal cortex. Regarding the time interval, the 0-1 s group showed an average classification accuracy of 24.2%, significantly higher than the 1-2 s group in the prefrontal cortex. Across the three regions, the prefrontal cortex showed significantly higher classification accuracy than the visual cortex in the 0-1 s interval group, implying cognitive arousal during visual imagery. These findings provide crucial information for improving decoding performance.
    Comment: 5 pages, 4 figures, 3 tables
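
    A sketch of the window-by-region comparison described above, assuming 250 Hz epochs and made-up channel index lists for each region; the classifier is a stand-in, not the authors' decoder.

```python
# Illustrative time-window x brain-region decoding comparison.
# Sampling rate, channel groupings, and classifier are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250
REGIONS = {"whole": list(range(64)),
           "visual": [60, 61, 62, 63],        # hypothetical occipital indices
           "prefrontal": [0, 1, 2, 3]}        # hypothetical frontal indices
WINDOWS = {"0-1 s": (0, fs), "1-2 s": (fs, 2 * fs)}

X = np.random.randn(120, 64, 2 * fs)          # 120 imagery epochs, 6 classes
y = np.repeat(np.arange(6), 20)

for rname, chans in REGIONS.items():
    for wname, (a, b) in WINDOWS.items():
        feats = X[:, chans, a:b].reshape(len(X), -1)
        acc = cross_val_score(LinearDiscriminantAnalysis(), feats, y, cv=5).mean()
        print(f"{rname:10s} {wname}: {acc:.3f}")
```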

    Reconstructing ERP Signals Using Generative Adversarial Networks for Mobile Brain-Machine Interface

    Practical brain-machine interfaces have been widely studied to accurately detect human intentions using brain signals in the real world. However, electroencephalography (EEG) signals are distorted by artifacts such as walking and head movement, which may be larger in amplitude than the desired EEG signals. Because of these artifacts, accurately detecting human intention in a mobile environment is challenging. In this paper, we propose a reconstruction framework based on generative adversarial networks using event-related potentials (ERPs) recorded during walking. We used a pre-trained convolutional encoder to obtain latent variables and reconstructed the ERP through a generative model whose shape mirrors that of the encoder. Finally, the ERP was classified using the discriminative model to demonstrate the validity of the proposed framework. As a result, the reconstructed signals contained important components such as the N200 and P300, similar to ERPs recorded during standing. The classification accuracy of the reconstructed EEG was similar to that of the raw noisy EEG signals recorded during walking. The signal-to-noise ratio of the reconstructed EEG was significantly increased, by 1.3. The loss of the generative model was 0.6301, which is comparatively low and indicates that training was effective. The reconstructed ERP consequently showed improved classification performance during walking owing to the noise reduction. The proposed framework could help recognize human intention through a brain-machine interface even in a mobile environment.
    Comment: Submitted to the IEEE International Conference on Systems, Man, and Cybernetics (SMC 2020)
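
    A compressed sketch of the encoder, generator, and discriminator arrangement described above, assuming single-channel 1 s ERP epochs at 250 Hz; the layer sizes and latent dimension are illustrative, and the training loop is omitted.

```python
# Hypothetical GAN-based ERP reconstruction: pre-trained encoder -> latent z,
# a mirrored generator rebuilds the clean ERP, a discriminator judges realism.
import torch
import torch.nn as nn

latent = 64

encoder = nn.Sequential(               # pre-trained on clean (standing) ERPs
    nn.Conv1d(1, 16, 15, stride=5, padding=7), nn.ELU(),
    nn.Conv1d(16, 32, 5, stride=5, padding=2), nn.ELU(),
    nn.Flatten(), nn.Linear(32 * 10, latent),
)
generator = nn.Sequential(             # roughly mirrors the encoder
    nn.Linear(latent, 32 * 10), nn.ELU(),
    nn.Unflatten(1, (32, 10)),
    nn.ConvTranspose1d(32, 16, 5, stride=5), nn.ELU(),
    nn.ConvTranspose1d(16, 1, 15, stride=5, padding=5),
)
discriminator = nn.Sequential(
    nn.Conv1d(1, 16, 15, stride=5, padding=7), nn.ELU(),
    nn.Flatten(), nn.Linear(16 * 50, 1),
)

noisy = torch.randn(8, 1, 250)         # ERPs recorded while walking
recon = generator(encoder(noisy))      # denoised ERP estimate, (8, 1, 250)
realism = discriminator(recon)         # adversarial signal during training
```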

    Classification of Imagined Speech Using Siamese Neural Network

    Imagined speech is in the spotlight as a new trend in brain-machine interfaces owing to its application as an intuitive communication tool. However, previous studies have shown low classification performance, so its use in real life is not yet feasible; in addition, no suitable method for analyzing it has been established. Recently, deep learning algorithms have been applied to this paradigm, but the small amount of available data limits the achievable gain in classification performance. To tackle these issues, we propose an end-to-end framework using a Siamese neural network encoder, which learns discriminative features by considering the distance between classes. The imagined words (e.g., arriba (up), abajo (down), derecha (right), izquierda (left), adelante (forward), and atrás (backward)) were classified using raw electroencephalography (EEG) signals. We obtained a 6-class classification accuracy of 31.40% for imagined speech, which significantly outperformed the other methods. This was possible because the Siamese neural network increases the distance between dissimilar samples while decreasing the distance between similar samples, allowing it to learn discriminative features from a small dataset. The proposed framework may help increase the classification performance of imagined speech with small amounts of data and enable an intuitive communication system.
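
    A sketch of the contrastive idea described above: a shared EEG encoder trained so that embeddings of same-word pairs are pulled together and different-word pairs pushed apart. The encoder architecture, margin, and input shapes are assumptions for illustration.

```python
# Hypothetical Siamese encoder + contrastive loss for imagined-speech EEG.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(               # shared weights for both branches
    nn.Conv1d(16, 32, 15, padding=7), nn.ELU(), nn.AvgPool1d(4),
    nn.Conv1d(32, 32, 7, padding=3), nn.ELU(), nn.AdaptiveAvgPool1d(1),
    nn.Flatten(), nn.Linear(32, 16),
)

def contrastive_loss(z1, z2, same, margin=1.0):
    """same=1: pull the pair together; same=0: push apart up to the margin."""
    d = F.pairwise_distance(z1, z2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# Pairs of imagined-word epochs (16 channels, 1 s at 256 Hz assumed)
x1, x2 = torch.randn(32, 16, 256), torch.randn(32, 16, 256)
same = torch.randint(0, 2, (32,)).float()   # 1 if both epochs share a word
loss = contrastive_loss(encoder(x1), encoder(x2), same)
loss.backward()
```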